Vision transformer networks have shown superiority in many computer vision tasks. In this paper, we take a further step by proposing a novel generative vision transformer with latent variables following an informative energy-based prior for salient object detection. Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation, in which sampling from the intractable posterior and prior distributions of the latent variables is performed by Langevin dynamics. Furthermore, with the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model's confidence in predicting saliency from the image. Different from existing generative models, which define the prior distribution of the latent variables as a simple isotropic Gaussian, our model uses an energy-based informative prior that is more expressive in capturing the latent space of the data. We apply the proposed framework to both RGB and RGB-D salient object detection tasks. Extensive experimental results show that our framework achieves not only accurate saliency predictions but also meaningful uncertainty maps that are consistent with human perception.
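The Langevin dynamics used above to sample the latent variables can be sketched in a few lines. The following is a minimal numpy illustration of short-run Langevin sampling on a toy energy, not the paper's transformer or its learned energy-based prior; `langevin_sample`, the step size, and the Gaussian toy energy are all illustrative assumptions:

```python
import numpy as np

def langevin_sample(grad_energy, z0, n_steps=100, step_size=0.01, rng=None):
    """Short-run Langevin dynamics:
    z_{t+1} = z_t - (s/2) * grad E(z_t) + sqrt(s) * noise."""
    rng = rng or np.random.default_rng(0)
    z = z0.copy()
    for _ in range(n_steps):
        z = (z - 0.5 * step_size * grad_energy(z)
             + np.sqrt(step_size) * rng.standard_normal(z.shape))
    return z

# Toy energy: standard Gaussian, E(z) = 0.5 * ||z||^2, so grad E(z) = z.
z = langevin_sample(lambda z: z, np.zeros(8), n_steps=2000, step_size=0.05)
```

In the paper's setting the same update would be driven by the gradients of the learned prior and posterior energies rather than this closed-form toy gradient.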
Although recent model-based studies on blind single image super-resolution (SISR) have achieved great success, most of them do not fully consider the image degradation. First, they always assume that image noise obeys an independent and identically distributed (i.i.d.) Gaussian or Laplacian distribution, which largely underestimates the complexity of real noise. Second, previous commonly used kernel priors (e.g., normalization, sparsity) are not effective enough to guarantee a rational kernel solution, and thus degrade the performance of the subsequent SISR task. To address the above issues, this paper proposes a model-based blind SISR method under a probabilistic framework, which elaborately models image degradation from the perspectives of noise and blur kernel. Specifically, instead of the traditional i.i.d. noise assumption, a patch-based non-i.i.d. noise model is proposed to tackle complicated real noise, which is expected to increase the degrees of freedom of the model for noise representation. As for the blur kernel, we newly construct a concise yet effective kernel generator and plug it into the proposed blind SISR method as an explicit kernel prior (EKP). To solve the proposed model, a theoretically grounded Monte Carlo EM algorithm is specifically designed. Comprehensive experiments demonstrate the superiority of our method over the state of the art on synthetic and real datasets.
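To make the non-i.i.d. assumption concrete: under a patch-based noise model, pixels in different patches can have different noise statistics. A minimal numpy sketch of generating such noise (with a per-patch standard deviation, purely illustrative and not the paper's learned noise model):

```python
import numpy as np

def non_iid_gaussian_noise(h, w, patch=8, rng=None):
    """Gaussian noise whose std varies from patch to patch (non-i.i.d.
    across the image), while pixels within a patch share one std."""
    rng = rng or np.random.default_rng(0)
    ph, pw = h // patch, w // patch
    sigma = rng.uniform(0.01, 0.1, size=(ph, pw))        # one std per patch
    sigma_map = np.kron(sigma, np.ones((patch, patch)))  # upsample to pixel grid
    return sigma_map * rng.standard_normal((h, w)), sigma_map

noise, sigma_map = non_iid_gaussian_noise(32, 32)
```

The i.i.d. assumption would correspond to a constant `sigma_map`; allowing it to vary is what adds the extra degrees of freedom the abstract refers to.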
Conventional saliency prediction models typically learn a deterministic mapping from an image to its saliency map, and thus fail to explain the subjective nature of human attention. In this paper, to model the uncertainty of visual saliency, we study the saliency prediction problem by learning a conditional probability distribution over saliency maps given an input image, and treating saliency prediction as a process of sampling from the learned distribution. Specifically, we propose a generative cooperative saliency prediction framework, where a conditional latent variable model (LVM) and a conditional energy-based model (EBM) are jointly trained to predict salient objects in a cooperative manner. The LVM serves as a fast but coarse predictor that efficiently produces an initial saliency map, which is then refined by the iterative Langevin revision of the EBM, which serves as a slow but fine predictor. Such a coarse-to-fine cooperative saliency prediction strategy offers the best of both worlds. Moreover, we propose a "cooperative learning while recovering" strategy and apply it to weakly supervised saliency prediction, where the saliency annotations of the training images are only partially observed. Lastly, we find that the energy function learned in the EBM can serve as a refinement module that can polish the results of other pre-trained saliency prediction models. Experimental results show that our model can produce a set of diverse and plausible saliency maps for an image, and obtains state-of-the-art performance in both fully supervised and weakly supervised saliency prediction tasks.
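The coarse-to-fine loop above, an initial map from the fast predictor revised by Langevin steps on an energy, can be sketched as follows. This is a toy numpy illustration with a hand-written quadratic energy standing in for the learned conditional EBM; `langevin_revise`, the step sizes, and the synthetic target are all assumptions for illustration:

```python
import numpy as np

def langevin_revise(energy_grad, y_init, n_steps=30, step=0.02, rng=None):
    """Refine the LVM's initial saliency map with Langevin updates on the EBM."""
    rng = rng or np.random.default_rng(0)
    y = y_init.copy()
    for _ in range(n_steps):
        y = y - 0.5 * step * energy_grad(y) + np.sqrt(step) * rng.standard_normal(y.shape)
        y = np.clip(y, 0.0, 1.0)  # keep saliency values in [0, 1]
    return y

coarse = np.full((16, 16), 0.5)                     # stand-in for an LVM output
target = np.zeros((16, 16)); target[4:12, 4:12] = 1.0
refined = langevin_revise(lambda y: y - target, coarse)  # toy quadratic energy
```

In the real framework the gradient comes from a learned conditional energy function rather than from any ground-truth map, which is what makes the refinement usable at test time.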
Due to the intractable partition function, training energy-based models (EBMs) by maximum likelihood requires Markov chain Monte Carlo (MCMC) sampling to approximate the gradient of the Kullback-Leibler divergence between the data and model distributions. However, sampling from an EBM is non-trivial because of the difficulty of mixing between modes. In this paper, we propose to learn a variational auto-encoder (VAE) to initialize the finite-step MCMC, such as Langevin dynamics derived from the energy function, for efficient amortized sampling of the EBM. With these amortized MCMC samples, the EBM can be trained by maximum likelihood, which follows an "analysis by synthesis" scheme, while the VAE learns from these MCMC samples via variational Bayes. We call this joint training algorithm variational MCMC teaching, in which the VAE chases the EBM toward the data distribution. We interpret the learning algorithm as dynamic alternating projection in the context of information geometry. Our proposed model can generate samples comparable to those of GANs and EBMs. Additionally, we demonstrate that our model can learn effective probabilistic distributions for supervised conditional learning tasks.
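The joint training loop can be sketched on a 1-D toy problem. Below, the "VAE" is collapsed to a single learnable sampler mean and the EBM is a Gaussian energy with one parameter, so the variational and maximum-likelihood updates reduce to mean fits; everything here is an illustrative assumption, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.0, size=2000)  # toy 1-D "data distribution"

mu = 0.0        # EBM parameter: E(x) = 0.5 * (x - mu)^2
vae_mean = 0.0  # stands in for the VAE's amortized sampler

for _ in range(200):
    # 1) Sampler proposes initial samples (ancestral sampling, no MCMC yet).
    x = rng.normal(vae_mean, 1.0, size=256)
    # 2) Finite-step Langevin dynamics derived from the energy revises them.
    for _ in range(10):
        x = x - 0.05 * (x - mu) + np.sqrt(0.1) * rng.standard_normal(x.size)
    # 3) EBM maximum-likelihood gradient for this energy: E_data[x] - E_model[x].
    mu += 0.1 * (data.mean() - x.mean())
    # 4) Sampler chases the revised samples (variational step becomes a mean fit).
    vae_mean += 0.5 * (x.mean() - vae_mean)
```

After training, both the energy's mode and the amortized sampler settle near the data mean, which is the "VAE chases the EBM toward the data distribution" dynamic in miniature.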
Understanding how grid cells perform path integration remains a fundamental problem. In this paper, we conduct a theoretical analysis of a general representation model of path integration by grid cells, in which the 2D self-position is encoded as a higher-dimensional vector and the 2D self-motion is represented as a general transformation of that vector. We identify two conditions on the transformation. One is a group representation condition that is necessary for path integration. The other is an isotropic scaling condition that ensures a locally conformal embedding, so that the error in the vector representation conforms to the error in the 2D self-position. We then investigate the simplest transformation, the linear transformation, uncover its explicit algebraic and geometric structure as matrix rotation, and explore the connection between the isotropic scaling condition and a special class of hexagonal grid patterns. Finally, with an optimization-based approach, we learn hexagonal grid patterns that share similar properties with the grid cells in the rodent brain. The learned model is capable of accurate long-distance path integration. Code is available at https://github.com/ruiqigao/grid-cell-path.
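The linear-transformation case can be made concrete with a small numpy sketch. For simplicity this restricts motion to a single direction (the paper treats general 2D self-motion) and hand-picks the block frequencies; the point is that the block-diagonal rotation satisfies the group representation condition, so composing two moves equals one combined move:

```python
import numpy as np

def rotation_block(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def motion_matrix(dx, freqs):
    """Block-diagonal rotation acting on the high-dimensional position
    vector: each 2-D block spins at its own frequency as the agent moves."""
    n = 2 * len(freqs)
    M = np.zeros((n, n))
    for i, w in enumerate(freqs):
        M[2 * i:2 * i + 2, 2 * i:2 * i + 2] = rotation_block(w * dx)
    return M

freqs = [1.0, 1.7, 2.9]
A, B = motion_matrix(0.3, freqs), motion_matrix(0.5, freqs)
```

Because each block is a rotation, the transformation also preserves the norm of the position vector, which connects to the isotropic scaling condition discussed above.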
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
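A minimal numpy sketch of the GRN idea, following the aggregate-normalize-calibrate recipe the abstract describes: aggregate a global per-channel statistic, divisively normalize it across channels, and use it to recalibrate the features. The exact parameterization and placement inside the ConvNeXt V2 block differ from this channels-last toy version:

```python
import numpy as np

def grn(x, gamma=1.0, beta=0.0, eps=1e-6):
    """Global Response Normalization on an (H, W, C) feature map,
    promoting competition among channels."""
    gx = np.sqrt((x ** 2).sum(axis=(0, 1)))  # global aggregation: L2 norm per channel
    nx = gx / (gx.mean() + eps)              # divisive normalization across channels
    return gamma * (x * nx) + beta + x       # calibration plus identity shortcut

y = grn(np.ones((2, 2, 3)))
```

Channels whose global response is above the cross-channel mean get amplified and the rest get suppressed, which is the inter-channel feature competition the layer is designed to enhance.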
A step-search sequential quadratic programming method is proposed for solving nonlinear equality constrained stochastic optimization problems. It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed via inexact probabilistic zeroth- and first-order oracles. Under reasonable assumptions, a high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived. Numerical results on standard nonlinear optimization test problems illustrate the advantages and limitations of our proposed method.
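The deterministic core of an SQP iteration can be sketched as solving the KKT system of the equality-constrained quadratic subproblem; in the stochastic setting of this paper, `g` and `H` would come from the inexact probabilistic oracles and the step would be vetted by the step-search. The helper name and the toy problem are illustrative assumptions:

```python
import numpy as np

def sqp_step(g, H, c, J):
    """One SQP step: solve the KKT system of the subproblem
    min_d  g^T d + 0.5 d^T H d   s.t.  J d + c = 0."""
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return sol[:n], sol[n:]  # search direction d and multiplier estimate

# Toy problem: min 0.5 * ||x||^2  s.t.  x0 + x1 = 1 (solution [0.5, 0.5]).
x = np.array([2.0, 0.0])
J = np.array([[1.0, 1.0]])
d, lam = sqp_step(g=x, H=np.eye(2), c=np.array([x.sum() - 1.0]), J=J)
x = x + d
```

Because the objective is quadratic and the constraint linear, a single exact step lands on the solution here; with noisy zeroth- and first-order estimates the method instead accepts or rejects steps based on stochastic function-value comparisons.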
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL) yet has been criticized for learning inefficiency. We believe the insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch with the disjoint regulation to raise the usage of tokens for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual branch architecture to respectively predict invisible (masked) and visible (unmasked) tokens with superior learning targets. Rooted in orthogonal perspectives on training efficiency improvement, DM and JD cooperatively accelerate training convergence without sacrificing model generalization ability. Concretely, DM can train ViT with half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks like semantic segmentation and object detection, our DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
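One way to realize the disjoint regulation is to make the visible token sets of the views pairwise disjoint, so their union covers the whole image while each view keeps a fixed masking rate. The sketch below is an assumed reading of that scheme in numpy, not the exact sampling procedure of DMJD:

```python
import numpy as np

def disjoint_masked_views(n_tokens, n_views, rng=None):
    """Sample n_views boolean masks (True = masked) whose visible token
    sets are pairwise disjoint; each view has masking rate 1 - 1/n_views
    and the visible sets jointly cover all tokens."""
    rng = rng or np.random.default_rng(0)
    chunks = np.array_split(rng.permutation(n_tokens), n_views)
    masks = np.ones((n_views, n_tokens), dtype=bool)
    for v, visible in enumerate(chunks):
        masks[v, visible] = False
    return masks

masks = disjoint_masked_views(n_tokens=196, n_views=4)  # 14x14 ViT token grid
```

With four views of a 196-token image, every token is visible in exactly one view and serves as a reconstruction target in the other three, raising token usage per image compared with a single random mask.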
Considering the computation complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for achieving lightweight models through the synergy of quantization and distillation. The training process of the quantization model is guided by its full-precision counterpart, which saves time and cost without preparing a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition, where a threshold on the distribution distance between the center and samples is applied in the weight value search space. Third, in order to improve information transformation, we propose a one-to-one self-teaching (OST) module to give the student network an ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and teacher network at the same location to help the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms existing detectors. The tiny parameter size (<9.7 MB) and Bit-Operations (BOPs) (<2158 G) compared with any remote-sensing, lightweight, or distillation-based algorithms demonstrate its superiority in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST.
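The flavor of the hybrid-quantization search can be sketched with symmetric uniform quantization and a simple error criterion standing in for the HQ module's distribution-distance threshold. All names, candidate bit widths, and the criterion here are illustrative assumptions, not GHOST's actual search:

```python
import numpy as np

def quantize(w, bits):
    """Symmetric uniform quantization of a weight tensor to a given bit width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

def choose_bit_width(w, candidates=(2, 3, 4, 6, 8), threshold=0.01):
    """Pick the smallest candidate bit width whose quantization error
    (a stand-in for the distribution-distance criterion) stays under a threshold."""
    for b in candidates:
        if np.abs(quantize(w, b) - w).mean() < threshold:
            return b
    return candidates[-1]

w = np.random.default_rng(0).normal(0.0, 0.1, size=1024)
b = choose_bit_width(w)
```

Per-layer selection like this is what makes the quantization "hybrid": layers with tightly clustered weights can afford fewer bits than layers with heavy tails.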
Automatic font generation without human experts is a practical and significant problem, especially for some languages that consist of a large number of characters. Existing methods for font generation are often based on supervised learning. They require a large number of paired data, which are labor-intensive and expensive to collect. In contrast, common unsupervised image-to-image translation methods are not applicable to font generation, as they often define style as the set of textures and colors. In this work, we propose a robust deformable generative network for unsupervised font generation (abbreviated as DGFont++). We introduce a feature deformation skip connection (FDSC) to learn local patterns and geometric transformations between fonts. The FDSC predicts pairs of displacement maps and employs the predicted maps to apply deformable convolution to the low-level content feature maps. The outputs of FDSC are fed into a mixer to generate the final results. Moreover, we introduce contrastive self-supervised learning to learn a robust style representation for fonts by understanding the similarities and dissimilarities of fonts. To distinguish different styles, we train our model with a multi-task discriminator, which ensures that each style can be discriminated independently. In addition to an adversarial loss, another two reconstruction losses are adopted to constrain the domain-invariant characteristics between generated images and content images. Taking advantage of FDSC and the adopted loss functions, our model is able to maintain spatial information and generate high-quality character images in an unsupervised manner. Experiments demonstrate that our model is able to generate character images of higher quality than state-of-the-art methods.
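The role of the predicted displacement maps can be illustrated with a nearest-neighbour toy version of displacement-driven sampling in numpy. Real deformable convolution uses learned fractional offsets with bilinear interpolation inside the convolution; this sketch only shows how a displacement map re-routes which locations a feature is gathered from, and all names here are illustrative:

```python
import numpy as np

def deform_sample(feat, offsets):
    """Gather features at displaced locations: a nearest-neighbour toy
    version of the sampling that FDSC's displacement maps drive."""
    h, w = feat.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ys = np.clip(ys + np.round(offsets[0]).astype(int), 0, h - 1)
    xs = np.clip(xs + np.round(offsets[1]).astype(int), 0, w - 1)
    return feat[ys, xs]

feat = np.arange(16.0).reshape(4, 4)
shift_right = np.stack([np.zeros((4, 4)), np.ones((4, 4))])  # dx = +1 everywhere
out = deform_sample(feat, shift_right)
```

A constant displacement just translates the feature map; FDSC's spatially varying maps instead warp local structure, which is how geometric differences between fonts are modeled.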